Seybold Report ISSN: 1533-9211
P. Suneel Kumar
Professor, Department of Electronics and Communication Engineering, Sridevi Women’s Engineering College, Hyderabad, India, psunilkumar.ece@gmail.com
T. Vasanthi
U.G. Student, Department of Electronics and Communication Engineering, Sridevi Women’s Engineering College, Hyderabad, India, vasanthiteki00@gmail.com
R. Akhila
U.G. Student, Department of Electronics and Communication Engineering, Sridevi Women’s Engineering College, Hyderabad, India, akhilaresoju123@gmail.com
A. Sahithi
U.G. Student, Department of Electronics and Communication Engineering, Sridevi Women’s Engineering College, Hyderabad, India, arravothusahithi@gmail.com
Vol 17, No 07 ( 2022 ) | DOI: 10.5281/zenodo.6879643 | Licensing: CC 4.0 | Pg no: 234-242 | Published on: 25-07-2022
Abstract
Decision trees (DTs) are widely used in machine learning (ML) because of their fast inference and interpretability. Because DT training is computationally intensive, in this brief we propose a hardware training accelerator to speed up the training procedure. The proposed training accelerator is implemented on a field-programmable gate array (FPGA) with a maximum operating frequency of 62 MHz. The proposed architecture combines parallel execution, to reduce training time, with pipelined execution, to reduce resource consumption. For a given design, the proposed implementation is at least 14 times faster than an equivalent C-based software implementation. In addition, the proposed architecture uses a single RESET signal to load a new training dataset, and this dynamic retraining demonstrates the hardware's flexibility for a wide range of applications.
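To make the training task concrete, the sketch below illustrates in software the kind of split computation that a two-means DT (TMDT) node performs: for each feature, a candidate threshold is taken as the midpoint of the two class means, and the feature with the fewest training misclassifications is selected. This is a minimal illustrative sketch under assumed details (binary labels, the midpoint heuristic, error-count selection); the paper's hardware accelerator parallelizes and pipelines this per-feature work, which software evaluates sequentially.

```python
import numpy as np

def two_means_threshold(x, y):
    """Candidate split threshold for one feature: the midpoint of the
    two class means (the 'two means' heuristic, assumed here)."""
    m0 = x[y == 0].mean()
    m1 = x[y == 1].mean()
    return (m0 + m1) / 2.0

def best_split(X, y):
    """Evaluate the two-means threshold for every feature and keep the
    one that misclassifies the fewest training samples."""
    best = None
    for j in range(X.shape[1]):  # hardware evaluates these features in parallel
        t = two_means_threshold(X[:, j], y)
        # Predict class 1 on the side of the threshold holding the class-1 mean.
        if X[y == 1, j].mean() > t:
            pred = (X[:, j] > t).astype(int)
        else:
            pred = (X[:, j] <= t).astype(int)
        errors = int(np.sum(pred != y))
        if best is None or errors < best[2]:
            best = (j, t, errors)
    return best  # (feature index, threshold, training errors)

# Tiny synthetic example: one feature, two well-separated classes.
X = np.array([[1.0], [2.0], [8.0], [9.0]])
y = np.array([0, 0, 1, 1])
feat, thr, err = best_split(X, y)
```

In this example the class means are 1.5 and 8.5, so the candidate threshold is 5.0 and the split separates the classes with zero training errors. In hardware, each per-feature threshold-and-count unit is an independent datapath, which is what makes the parallel, pipelined FPGA mapping effective.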
Keywords:
Decision tree (DT), field-programmable gate array (FPGA), machine learning (ML), training accelerator, two-means DT (TMDT).